Covenant Advisory Group Limited

Vendor Due Diligence Guide

Smarter Contracts. Sharper Outcomes.

Vendor Due Diligence: What to Ask and Why

Structured, risk-based diligence for SMEs, fintech/regtech, and startups.

Disclaimer: Read Before Use

This guide provides general information to support vendor due diligence and contracting. It is not legal, regulatory, accounting, or risk advice, and it does not create a lawyer–client relationship. Laws and regulations vary by jurisdiction and evolve frequently, especially in financial services, privacy, cyber, and AI. You should obtain advice tailored to your circumstances, sector, and geography before taking or refraining from any action. Use of this guide is at your own risk; no representation or warranty is made as to completeness or fitness for a particular purpose.


1. Corporate Profile and Ownership

Asks

  • Provide full legal name, registered address, trading names, corporate structure, and jurisdictions of incorporation/operation.
  • Identify beneficial owners (≥25%), directors, and any Politically Exposed Persons (PEPs).
  • Disclose group structure, subsidiaries used for service delivery, and any reliance on subcontractors or affiliates.
  • Confirm registration numbers, tax IDs, and any foreign registrations where services are performed.
  • Provide a litigation and regulatory actions summary for the last five years.
  • Disclose any economic sanctions exposure or ownership links to sanctioned parties or embargoed jurisdictions.
  • Explain governance model: board composition, independent directors, committees (audit/risk), and meeting cadence.
  • Confirm change‑of‑control history and any contemplated M&A, restructuring, or divestments that could affect service continuity.

Why it matters

Understanding legal identity, control, and footprint reduces counterparty, sanctions, and legal enforcement risks. Beneficial ownership checks help detect conflicts, corruption risks, and sanctions issues. Group structure and subcontracting affect data flows, service resilience, and governing law. Litigation history signals potential pattern risks. Governance maturity correlates with decision quality, internal control strength, and responsiveness in crises.

Evidence to request

Corporate registry extracts, cap table or ownership attestation, organizational chart, director/PEP screening results, sanctions screening outputs, litigation/regulatory summary on letterhead, and tax registration certificates. Board/committee charters, recent board minutes (redacted), and corporate governance policy.

How to assess responses

Seek consistency between public registries and vendor disclosures. Validate beneficial ownership using independent databases and check for PEP/sanctions hits. Evaluate whether governance structures are proportionate to company scale and risk. Consider whether group structure or cross‑border operations introduce enforcement or data transfer constraints.

Red flags & follow-ups

Opaque or shifting ownership; nominee shareholders without rationale; links to sanctioned or high‑risk jurisdictions; frequent restructuring; material undisclosed litigation; absence of governance documentation; resistance to beneficial ownership verification.


2. Financial Stability and Insurance

Asks

  • Provide audited financial statements for the last two years (or management accounts and cash runway statement for startups).
  • Disclose debt facilities, covenants, and any going‑concern qualifications.
  • Provide key financial KPIs: revenue, gross margin, operating cash flow, burn rate, months of runway.
  • Confirm availability and limits of insurance: professional indemnity/E&O, cyber, tech E&O, general liability, employers’ liability, D&O, crime/fidelity, media liability (as applicable).
  • Identify deductibles, exclusions, retroactive dates, and “claims‑made” vs “occurrence” basis.
  • Describe treasury management, cash controls, diversification of banking partners, and FX risk management if transacting cross‑border.
  • Provide details of contingency funding options (e.g., committed lines, parent support) and the vendor’s financial contingency/solvency plan.

Why it matters

Financial resilience and appropriate insurance indicate the vendor’s ability to deliver and to absorb losses or claims. Early‑stage companies may be strong technically but fragile financially; runway and contingency planning are critical. Insurance structure determines recovery prospects if something goes wrong. Concentration of cash or covenant pressure can become operational risk; robust treasury and contingency planning increase survivability.

Evidence to request

Audited accounts or management accounts, bank reference or proof of funds (as appropriate), insurance certificates with wording summaries and endorsements, broker letter of authenticity. Schedule of covenants, management discussion and analysis, list of insurers and policy wordings (summaries), and solvency/contingency plan.

How to assess responses

For startups, triangulate runway (cash/burn) against growth and hiring plans. Review covenant headroom and sensitivity to downside scenarios. Scrutinize insurance exclusions (e.g., war, failure to maintain security) against your risk profile. Consider insurer credit quality and notice of cancellation provisions.
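The runway arithmetic itself is simple and worth doing independently rather than accepting the vendor's stated figure. A minimal sketch (all figures hypothetical):

```python
def months_of_runway(cash_on_hand: float, monthly_net_burn: float) -> float:
    """Months until cash is exhausted at the current net burn rate."""
    if monthly_net_burn <= 0:
        return float("inf")  # cash-flow positive: no finite runway constraint
    return cash_on_hand / monthly_net_burn

# Hypothetical vendor figures: 2.4m cash, 200k/month net burn.
print(months_of_runway(2_400_000, 200_000))  # 12.0
```

Compare the result against the contract term plus a transition buffer; a 12‑month runway against a 24‑month commitment is a gap that contingency funding or escrow should cover.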

Red flags & follow-ups

Going‑concern qualifications; negative operating cash flow without credible funding path; narrow insurance limits relative to potential exposure; high deductibles or material exclusions for cyber or IP; reliance on a single bank or uncommitted credit; unwillingness to share broker attestations.


3. Compliance, Licensing, and Regulatory Status

Asks

  • Confirm all licenses and registrations required to deliver the services, by jurisdiction.
  • For fintech/regtech: disclose authorisations (e.g., e‑money, payments, investment services, credit reference, KYC/AML service provider status), appointed representative arrangements, passporting, outsourcing notifications, and supervisory interactions.
  • Provide compliance policies: AML/CTF, sanctions, anti‑bribery and corruption, conflicts of interest, whistleblowing, data protection, trade controls.
  • State compliance ownership: name of MLRO/Compliance Officer, reporting lines, and board oversight.
  • Provide outcomes of recent regulatory audits/inspections, including any remedial actions.
  • Describe compliance monitoring and testing plan, breach/issue management, and board reporting cadence.
  • Confirm horizon‑scanning and regulatory change management processes, including mapping to control updates.

Why it matters

Unlicensed activity can trigger contractual illegality, regulatory penalties, and reputational harm. Robust compliance frameworks reduce enforcement and operational risks, especially in financial services and data‑intensive operations. Demonstrable monitoring, issue remediation, and regulatory engagement reduce residual risk.

Evidence to request

License copies, regulatory letters, policy documents, training records, compliance monitoring plans, board minutes noting compliance reporting. Annual compliance plan, breach/issue logs, and remediation tracking.

How to assess responses

Check licensing scope against the actual services and jurisdictions used in delivery. Validate supervisory interactions and review remediation evidence for closed findings. Evaluate compliance independence, resourcing, and authority.

Red flags & follow-ups

Operating in gray areas without counsel analysis; material past findings with weak remediation; policy libraries without monitoring; compliance reporting into operations without escalation rights; lack of change management for new regulations.


4. Information Security and Cyber Risk

Asks

  • Provide information security certifications (ISO 27001, SOC 2 Type II, PCI DSS, CSA STAR) and the latest audit reports.
  • Describe security governance: CISO role, security org chart, risk management processes, and alignment to frameworks (NIST CSF, ISO 27001, CIS).
  • Explain data segregation, encryption (in transit/at rest), key management, secrets handling, and CI/CD security controls.
  • Provide details on identity and access management, including MFA coverage, least privilege, and joiner/mover/leaver processes.
  • Disclose vulnerability management cadence, patching SLAs, penetration tests, red team results, and remediation tracking.
  • Describe network architecture, segmentation, use of zero trust principles, and endpoint protection (EDR/XDR).
  • Provide incident response plan, detection capabilities (SIEM/SOAR), and breach history in the last three years.
  • Confirm third‑party risk management, including due diligence and continuous monitoring of subcontractors.
  • For cloud services: specify CSPs, regions, tenancy model, and shared responsibility assumptions.
  • Detail backup strategy, immutable backups, ransomware resilience, and recovery testing outcomes.

Why it matters

Information security failures drive costly incidents, regulatory breaches, and outages. Independently audited controls provide assurance. Understanding architecture and process maturity helps evaluate likelihood and impact of compromise. Resilience to modern threats (e.g., ransomware, supply‑chain attacks) is a differentiator.

Evidence to request

SOC 2 reports with management response, ISO certificates, pen test executive summaries, IR plan, security policies, incident logs/summary metrics, vulnerability scans, architectural diagrams (sanitized). BCP/DR test results focusing on cyber scenarios, IAM metrics (MFA coverage), and backup/recovery test reports.

How to assess responses

Corroborate certifications (certificate numbers, dates, scope). Look for quantified control coverage (e.g., MFA ≥ 95% of privileged accounts). Review time‑to‑patch metrics and pen test remediation closure rates. Map cloud shared‑responsibility boundaries to contract obligations.
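Where the vendor provides a raw access export from its identity provider, coverage claims can be recomputed rather than taken on trust. An illustrative sketch (record structure and names are assumptions, not a standard export format):

```python
# Hypothetical IAM export: one record per account.
accounts = [
    {"user": "alice", "privileged": True,  "mfa": True},
    {"user": "bob",   "privileged": True,  "mfa": False},
    {"user": "carol", "privileged": False, "mfa": True},
]

def mfa_coverage(accounts, privileged_only=True):
    """Fraction of (privileged) accounts with MFA enforced."""
    pool = [a for a in accounts if a["privileged"]] if privileged_only else accounts
    if not pool:
        return 1.0
    return sum(a["mfa"] for a in pool) / len(pool)

print(f"Privileged MFA coverage: {mfa_coverage(accounts):.0%}")  # 50%
```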

Red flags & follow-ups

Expired or in‑progress certifications without dates; patch backlog; admin access without MFA; broad production access for contractors; no tabletop IR exercises; unclear ransomware playbooks; limited visibility into fourth‑party security.


5. Data Protection and Privacy

Asks

  • Identify personal data categories processed, special category data, children’s data, and any profiling or automated decision‑making.
  • Provide the data processing locations, data transfer mechanisms (e.g., SCCs, BCRs), and sub‑processor list with roles and locations.
  • Provide DPA terms, including controller/processor roles, instructions, confidentiality, deletion/return, audit rights, and assistance obligations.
  • Describe privacy governance, DPIA approach, ROPA, retention schedules, and data subject rights processes.
  • Confirm security of personal data, anonymization/pseudonymization practices, and encryption standards.
  • Disclose data breaches and regulatory notifications for the last three years.
  • For AI features: describe model training data sources, IP licensing status, data minimization, opt‑outs, and safeguards for bias, explainability, and output filtering.
  • Explain consent and transparency mechanisms, cookie/SDK tracking controls, and handling of scraping or third‑party data sources.

Why it matters

Compliance with data protection laws is mandatory and high‑impact. Data flows and sub‑processing determine transfer risk. AI‑related data practices attract heightened scrutiny from regulators and enterprise customers. Transparency, minimization, and lawful bases are decisive in regulator evaluations and enterprise procurement.

Evidence to request

Completed DPIAs, ROPA extracts, sub‑processor register, DPA schedule, SCCs, privacy policy, deletion certificates, data mapping diagrams, breach logs. DSR metrics (volumes, SLAs), training logs, and transfer impact assessments where applicable.

How to assess responses

Check that roles (controller vs processor) are clearly defined and consistent with actual processing. Validate sub‑processor notifications and objection rights. Review breach post‑mortems and regulator correspondence for lessons learned and control upgrades. Ensure transfer mechanisms align to current law and guidance.

Red flags & follow-ups

Unmapped data flows; vague sub‑processor lists; refusal of audit/inspection rights; broad rights to train models on your data without opt‑out; inadequate deletion/return terms; repeat breaches without systemic remediation.


6. Product and Technology

Asks

  • Provide a high‑level architecture, dependencies, technology stack, and environment topology (dev/test/stage/prod).
  • Describe SDLC practices: secure coding, code reviews, SAST/DAST, dependency scanning, SBOM, change management, and release governance.
  • Provide uptime/SLA history for the last 12–24 months, RTO/RPO targets, and DR/BCP test results.
  • Explain roadmap governance, product versioning, deprecation policy, and end‑of‑life support.
  • Disclose use of open‑source components and licenses; confirm vulnerability management and license compliance processes.
  • For APIs: provide documentation, rate limits, auth methods (OAuth 2.0/OIDC), webhooks security, and sandbox availability.
  • For fintech/regtech: explain model accuracy/validation, explainability, audit trails, and traceability for regulatory reporting and surveillance tools.
  • Describe observability: logging, metrics, tracing (e.g., OpenTelemetry), SLOs/error budgets, and on‑call/incident management practices.

Why it matters

Technology maturity and reliability underpin service quality and resilience. Poor SDLC and change control are leading causes of outages and vulnerabilities. OSS and API posture drive integration risk and legal exposure. Strong observability and disciplined operations reduce MTTR and customer impact.

Evidence to request

Architecture diagrams, SDLC policy, SOC/SIEM dashboards screenshots, DR test reports, SLA reports, SBOM, OSS license register, API docs. Incident post‑mortems, SLO dashboards, and change approval records.

How to assess responses

Examine SBOMs for high‑severity CVEs and patch latency. Check deprecation policies for backward compatibility windows. Validate API auth flows and rate limits against your expected volumes. Review post‑incident actions for systemic fixes, not just one‑offs.
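The SBOM review lends itself to simple automated triage. A sketch assuming a CycloneDX‑style JSON export with a vulnerability annex (real tool output varies, and the structure below is simplified):

```python
import json
from datetime import date

# Simplified CycloneDX-style SBOM with a vulnerability annex (structure assumed).
sbom = json.loads("""{
  "components": [{"name": "libexample", "version": "1.2.3"}],
  "vulnerabilities": [
    {"id": "CVE-2024-0001", "severity": "critical", "published": "2024-01-10"},
    {"id": "CVE-2024-0002", "severity": "low", "published": "2024-03-01"}
  ]
}""")

def high_severity_open(vulns, severities=("critical", "high")):
    """IDs of vulnerabilities at or above the chosen severity band."""
    return [v["id"] for v in vulns if v["severity"] in severities]

def patch_latency_days(published: str, fixed: str) -> int:
    """Days between publication of a CVE and deployment of the fix."""
    return (date.fromisoformat(fixed) - date.fromisoformat(published)).days

print(high_severity_open(sbom["vulnerabilities"]))  # ['CVE-2024-0001']
print(patch_latency_days("2024-01-10", "2024-02-09"))  # 30
```

Trending patch latency across quarterly SBOM deliveries is often more revealing than any single snapshot.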

Red flags & follow-ups

No SBOM or OSS governance; frequent breaking changes; single‑tenant promises without isolation details; fragile release processes (e.g., no canary/rollbacks); limited run‑time telemetry; outdated libraries.


7. Operational Resilience, Business Continuity, and Disaster Recovery

Asks

  • Provide business impact analysis (BIA), BCP/DR plans, and last test dates with outcomes.
  • Specify RTO/RPO commitments and dependency mapping (people, tech, facilities, third parties).
  • Describe capacity management, monitoring, and scaling strategies; cloud failover regions and automation.
  • Confirm arrangements for key person risk, knowledge management, and succession planning.
  • Provide pandemic/health and safety protocols and site resilience where physical presence matters.
  • Describe crisis management structure, communications plans, and regulatory/customer notification playbooks.

Why it matters

Resilience safeguards against outages and continuity failures. Proven testing and realistic recovery objectives reduce disruption to your operations and customers. Coordinated crisis and communications reduce secondary harm and regulatory exposure.

Evidence to request

BCP/DR plans, BIA summary, test reports, resilience metrics, dependency registers, rota and succession plans. Crisis comms templates and after‑action reviews.

How to assess responses

Check that RTO/RPO align with your business needs and contractual SLAs. Verify regular, scenario‑based testing (including cyber, cloud region loss, vendor outage). Evaluate cross‑functional participation and lessons‑learned implementation.
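The RTO/RPO alignment check reduces to a per‑process comparison between your tolerances and the vendor's stated objectives. An illustrative sketch (process names and figures are hypothetical, in hours):

```python
# Your tolerances per dependent business process vs the vendor's stated objectives.
requirements = {
    "payments":  {"rto": 2,  "rpo": 0.25},
    "reporting": {"rto": 24, "rpo": 4},
}
vendor = {"rto": 4, "rpo": 1}

def resilience_gaps(requirements, vendor):
    """Processes where the vendor's recovery objectives exceed your tolerance."""
    return [p for p, req in requirements.items()
            if vendor["rto"] > req["rto"] or vendor["rpo"] > req["rpo"]]

print(resilience_gaps(requirements, vendor))  # ['payments']
```

Any process surfaced here needs either a tighter contractual commitment or a compensating control on your side.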

Red flags & follow-ups

Plans untested or last tested more than 12 months ago; RTO/RPO not contractually backed; single‑region cloud deployments without failover; undocumented dependencies; hero‑culture reliance on key individuals.


8. Legal and Contractual

Asks

  • Provide standard contract terms and confirm willingness to negotiate key clauses: limitation of liability, indemnities, IP, data protection, audit, assignment, termination for convenience, step‑in/escrow, and change control.
  • Confirm governing law and jurisdiction, dispute resolution (escalation, mediation, arbitration), and service levels as contractual commitments with credits.
  • Disclose material contracts or upstream restrictions affecting your use, reselling, or export.
  • Provide IP ownership chain for deliverables and pre‑existing materials; confirm non‑infringement and licensing rights.
  • For software: clarify license scope, user metrics, environment restrictions, bundling, and audit rights.
  • For AI and models: define training rights on your data, output ownership, indemnities for infringement, and hallucination/error disclaimers.
  • Confirm audit and penetration testing rights, frequency, and constraints; specify rights to obtain third‑party assurance reports.

Why it matters

Contracts allocate risk. Negotiating the right baseline prevents gaps that are expensive to fix once services are embedded. IP clarity avoids disputes and resale/scale constraints. Assurance rights and well‑drafted SLAs underpin enforceability and oversight.

Evidence to request

Master services agreement (MSA), order forms, SOW templates, DPAs, acceptable use policy, open‑source attributions, escrow agreements, license audits policy. Sample SLA schedules, security/privacy appendices, and pro‑forma change control.

How to assess responses

Map contractual positions to operational reality (e.g., SLA measurement, audit practicalities). Compare liability caps to credible exposure and ensure appropriate carve‑outs (e.g., data breach/IP/intentional misconduct). Check upstream constraints for pass‑through limitations that affect your intended use.

Red flags & follow-ups

One‑sided liability caps below insurance; no indemnity for IP or data breach; broad disclaimers nullifying SLAs; refusal of reasonable audit rights; click‑wrap terms overriding negotiated terms; ambiguous ownership of outputs or data.


9. Commercials, Pricing, and Value for Money

Asks

  • Provide pricing model, unit economics, discounts, and volumetric assumptions.
  • Disclose uplifts, indexation, and step‑downs or floors on year‑over‑year pricing.
  • Confirm auto‑renewal triggers, notice periods, and price protection on renewals.
  • Provide service bundle components, optional modules, and hidden costs (implementation, training, data migration, overages).
  • Explain commercial incentives for performance (service credits, earn‑backs) and alignment to KPIs.
  • For startups: request contingency pricing or escrow for prepayments; for critical services consider performance bonds or parent guarantees.
  • Describe exit fees, data export costs, and any costs tied to regulatory changes or security enhancements.

Why it matters

Transparent pricing avoids budget shocks and misaligned incentives. Renewal dynamics often drive total cost of ownership. Commercial levers provide recourse for underperformance. Clarity on exit costs prevents lock‑in.

Evidence to request

Pricing sheets, sample invoices, renewal policies, calculation examples, and benchmarking comparisons where permitted. Change order pricing matrix and professional services rate cards.

How to assess responses

Model TCO under realistic usage scenarios and growth. Stress‑test renewal clauses and indexation against inflation. Align credits with the severity of impact and ensure they do not operate as exclusive remedies.
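The effect of compounding uplifts is easy to underestimate; modelling it explicitly strengthens the negotiating position on indexation caps. A minimal sketch (figures hypothetical):

```python
def contract_tco(year1_fee: float, annual_uplift: float, years: int) -> float:
    """Total cost over the term with a compounding year-on-year uplift."""
    return sum(year1_fee * (1 + annual_uplift) ** y for y in range(years))

# Hypothetical: 100k/year base fee over 3 years, 7% uplift vs a 3% cap.
print(round(contract_tco(100_000, 0.07, 3)))  # 321490
print(round(contract_tco(100_000, 0.03, 3)))  # 309090
```

Even over a short term, the uncapped uplift costs roughly 4% more; over five or more years the divergence becomes material.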

Red flags & follow-ups

Auto‑renewals with short notice; steep uplifts; ambiguous metrics (e.g., MAU definitions); punitive overage fees; separate charges for essential security features; non‑transparent export/transition fees.


10. People, Culture, and HR Compliance

Asks

  • Provide headcount by function, location, and employment model (employee/contractor).
  • Disclose background checks, right‑to‑work verification, and screening levels for privileged roles.
  • Provide training programs and completion rates: security, privacy, AML/CTF, conduct, anti‑harassment.
  • Confirm diversity and inclusion policies, whistleblowing, and speak‑up mechanisms.
  • For offshoring/nearshoring: explain employment law compliance, worker classification, and TUPE/ARD risk for insourcing transitions.
  • Describe access segregation between employees and contractors and supervision of high‑privilege roles.

Why it matters

People risks frequently underpin security, delivery, and regulatory failures. Strong culture and controls reduce error, fraud, and turnover impact. Proper screening, segregation of duties, and culture programs mitigate insider threats.

Evidence to request

Org charts, HR policies, training logs, background check policy, contractor agreements, code of conduct. Access review summaries for privileged roles and whistleblowing policy usage metrics.

How to assess responses

Cross‑check training completion and recertification cadence. Validate that contractors receive equivalent controls and training. Review attrition rates in critical teams and succession plans.

Red flags & follow-ups

No screening for privileged roles; weak joiner/mover/leaver processes; cultural issues evidenced by whistleblowing inactivity or retaliation; heavy reliance on unsupervised contractors; high turnover in SRE/security.


11. Environmental, Social, and Governance (ESG)

Asks

  • Provide ESG policy, governance, and reporting frameworks (e.g., TCFD, CSRD readiness, SASB).
  • Disclose carbon footprint metrics (Scopes 1–3 where available) and reduction targets.
  • Describe supply chain ethics, modern slavery compliance, conflict minerals (if relevant), and vendor standards.
  • Provide DEI metrics or initiatives, community impact, and charitable programs.
  • Confirm environmental certifications and data center energy practices for cloud/hosting providers.
  • Explain governance oversight (board/senior management) and integration of ESG into procurement and product decisions.

Why it matters

ESG affects brand, stakeholder expectations, and, increasingly, legal obligations. For technology stacks, energy efficiency and ethical sourcing are material considerations. Governance integration signals seriousness and durability of commitments.

Evidence to request

ESG policy, sustainability report, supplier code of conduct, modern slavery statement, certificates (e.g., ISO 14001), DC energy usage reports. Supplier audit results and corrective action plans.

How to assess responses

Check targets for specificity and timelines. Validate data center energy claims with third‑party attestations. Review supplier codes for enforceable standards and remediation mechanisms.

Red flags & follow-ups

Aspirational statements without metrics; Scope 3 avoidance where material; no supplier due diligence; greenwashing indicators; no governance ownership.


12. Anti‑Financial Crime (AFC)

Asks

  • Provide AML/CTF and sanctions policies, risk assessment methodology, customer and transaction screening tools, and case management systems.
  • Disclose typologies addressed, rule governance, model validation and tuning, audit trail, and QA processes.
  • Confirm training, suspicious activity reporting metrics, and law enforcement liaison processes.
  • For regtech vendors: describe algorithm transparency, explainability, back‑testing, false positive rates, and PEP/sanctions list sources and update cycles.
  • Explain data lineage and quality controls for KYC/transaction data and how model changes are governed.

Why it matters

AFC failures carry acute regulatory and reputational consequences. For fintechs/regtechs, model risk and explainability are central. Data quality and governance are often root causes of detection failures and false positives.

Evidence to request

Policies, enterprise risk assessments, system architecture, validation reports, sample MI/QA dashboards, SAR process documents. Model change logs, performance metrics over time, and independent validation reviews.

How to assess responses

Review how typologies map to customer risk and geographies. Check governance for model approvals and periodic validations. Inspect quality assurance and SAR timeliness metrics.
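Vendor MI on alert dispositions can be recomputed to sanity‑check stated false positive rates. An illustrative sketch (disposition categories and counts are hypothetical and will differ by case‑management system):

```python
def alert_quality(dispositions: dict) -> dict:
    """Summarise alert outcomes from case-management MI.

    dispositions: counts per outcome category; 'escalated_sar' and
    'true_match' are treated as productive alerts (category names assumed).
    """
    total = sum(dispositions.values())
    productive = dispositions.get("escalated_sar", 0) + dispositions.get("true_match", 0)
    return {
        "total_alerts": total,
        "false_positive_rate": (total - productive) / total if total else 0.0,
    }

# Hypothetical monthly MI sample: 1,000 alerts, 50 productive.
print(alert_quality({"escalated_sar": 12, "true_match": 38, "closed_no_action": 950}))
```

A 95% false positive rate is not unusual in transaction monitoring, but it should be trending down through tuning, with each tuning change governed and logged.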

Red flags & follow-ups

Static rules without tuning; opaque models; outdated lists; inadequate QA; missing audit trails; high false positives without remediation; insufficient data provenance.


13. Intellectual Property and Content Risk

Asks

  • Confirm ownership of software, datasets, and models; list third‑party components and license terms.
  • Describe IP clearance processes for brand, content, and training datasets.
  • Provide indemnities and caps specific to IP infringement; confirm no open‑source copyleft contagion on deliverables unless agreed.
  • For generative AI: explain handling of copyrighted material, dataset provenance, output filtering, and opt‑out capabilities.
  • Clarify rights to derivative works, feedback, and improvements, and whether such rights are shared or assigned.

Why it matters

IP disputes can disrupt service and create significant liability. Clarity on third‑party rights and licensing keeps products deployable and defensible. Feedback/improvement rights affect long‑term competitiveness and lock‑in.

Evidence to request

IP registers, license schedules, OSS register, indemnity clause excerpts, dataset provenance documentation. Patent/assignment agreements for key contributors and evidence of contractor IP assignments.

How to assess responses

Trace ownership of core assets through assignments and contributor agreements. Assess OSS license compatibility. For AI, evaluate dataset provenance and any opt‑out or indemnity coverage for training on your data.

Red flags & follow-ups

Gaps in contractor assignments; reliance on restricted datasets; copyleft contagion risk; narrow or eroding indemnities; broad vendor claims over your data/outputs; unclear feedback rights.


14. Subcontractors, Fourth Parties, and Supply Chain

Asks

  • Provide a complete list of subcontractors and sub‑processors, their roles, locations, and access levels.
  • Confirm flow‑down of contractual obligations, including security, privacy, audit, and termination rights.
  • Describe onboarding/offboarding, monitoring, and contingency plans for critical subcontractors.
  • Disclose concentration risks and exit/transition support in the event a subcontractor fails.
  • Explain geo‑location strategies and data residency implications for sub‑processing.

Why it matters

Your risk extends into the vendor’s supply chain. Visibility and control over fourth parties are now standard practice for regulators and enterprise customers. Geographic choices affect latency, compliance, and resilience.

Evidence to request

Subcontractor register, sample flow‑down clauses, monitoring reports, termination playbooks. Sub‑processor change logs and notification history.

How to assess responses

Check that flow‑downs mirror your contractual requirements. Review monitoring cadence (e.g., SOC reports collection, security questionnaires) and termination readiness for critical dependencies.

Red flags & follow-ups

Undisclosed critical subcontractors; weak flow‑downs; sub‑processors in high‑risk jurisdictions; no contingency plans; frequent changes without notice.


15. Ethics, Conduct, and Responsible AI

Asks

  • Provide codes of ethics and conduct, conflicts‑of‑interest policies, gifts/hospitality registers, and anti‑corruption training metrics.
  • For AI systems: share responsible AI framework, risk classification, human‑in‑the‑loop controls, bias testing, model cards, and incident reporting for AI failures.
  • Confirm adherence to industry standards or emerging regulations affecting AI, data scraping, and biometric data.
  • Describe escalation routes for ethical concerns and protections for reporters.

Why it matters

Ethical lapses and irresponsible AI practices create legal and reputational risk. Governance signals maturity and long‑term viability. Effective speak‑up culture uncovers issues early.

Evidence to request

Ethics policy, conflicts register process, AI governance charter, bias and robustness test summaries. Hotline metrics and investigation procedures.

How to assess responses

Evaluate independence of ethics oversight and integration with compliance and risk. For AI, look for documented risk classifications, pre‑deployment reviews, and post‑deployment monitoring.

Red flags & follow-ups

Paper policies without training; retaliation risks; no AI incident management; lack of data provenance for AI; absence of human oversight where required.


16. Service Levels, KPIs, and Reporting

Asks

  • Provide defined SLAs, KPIs, and service credits; confirm measurement methodology and independent validation.
  • Disclose incident classification, escalation timelines, communications templates, and post‑incident review cadence.
  • Provide dashboard samples and reporting frequency; confirm rights to raw logs/telemetry for verification.
  • Clarify maintenance windows, change freeze policies, and customer notification periods.
  • Explain calculation of availability (exclusions, planned maintenance, partial outages) and error budget policies.

Why it matters

Measurable performance and transparent reporting enable accountability and timely remediation. Accurate definitions and measurement prevent disputes and ensure meaningful remedies.

Evidence to request

SLA schedules, runbooks, incident communication templates, sample reports. Third‑party uptime attestations and methodology descriptions.

How to assess responses

Ensure SLAs align to business outcomes (not just technical metrics). Validate independence of measurements. Review PIRs for root‑cause depth and prevention measures.
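Availability definitions are where most SLA disputes arise, so it pays to replicate the vendor's calculation. A sketch of one common (but negotiable) convention, in which agreed planned maintenance is excluded from the denominator:

```python
def availability(period_minutes: int, downtime_minutes: float,
                 planned_maintenance_minutes: float = 0.0) -> float:
    """Availability over a period, excluding agreed planned maintenance
    from the measured window (a common, but negotiable, SLA convention)."""
    measured = period_minutes - planned_maintenance_minutes
    return (measured - downtime_minutes) / measured

# Hypothetical 30-day month: 43,200 minutes, 60 min planned, 40 min unplanned.
print(f"{availability(43_200, 40, 60):.4%}")  # 99.9073%
```

Note how the exclusion flatters the headline figure: check whether partial outages and degraded service count as downtime, and whether the same convention applies to service credit calculations.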

Red flags & follow-ups

Overbroad exclusions; credits capped at trivial amounts; unverifiable measurements; lack of PIRs; opaque maintenance notifications.


17. Onboarding, Implementation, and Change Management

Asks

  • Provide implementation methodology, project governance, change control, and acceptance criteria.
  • Disclose dependencies on your systems, access requirements, and data migration approach.
  • Provide training plans, knowledge transfer, and hypercare support details.
  • Confirm responsibilities demarcation (RACI) and named project leads.
  • Explain configuration vs customization approach and version control for customer‑specific changes.

Why it matters

Poor onboarding creates delays, scope creep, and security gaps. Clear governance and acceptance reduce friction and disputes. Favoring configuration over customization typically lowers long‑term cost and risk.

Evidence to request

Project plans, RACI, acceptance test scripts, migration playbooks, training materials. Cutover plans and rollback procedures.

How to assess responses

Assess realism of timelines and resource commitments. Validate change gates and criteria for acceptance. Review migration rehearsal results and rollback readiness.

Red flags & follow-ups

Undefined acceptance criteria; heavy customization; unclear data migration ownership; missing rollback plans; reliance on a single SME.


18. Exit, Termination, and Transition

Asks

  • Provide termination rights for convenience and cause; notice periods, cure rights, and early termination fees.
  • Confirm transition assistance obligations, data export formats, and deletion timelines with certification.
  • Provide escrow arrangements for critical software and triggers for release.
  • Disclose any non‑compete or non‑solicit provisions impacting future options.
  • Describe knowledge transfer, access retention during transition, and pricing for transition support.

Why it matters

Exit optionality and smooth transition mitigate lock‑in risk and ensure continuity if the relationship ends or your strategy changes. Practical transition support avoids service gaps and data loss.

Evidence to request

Termination clause excerpts, transition schedules, data export specs, escrow agreements. Deletion certificates and sample transition runbooks.

How to assess responses

Ensure export formats are practical and complete. Confirm deletion timelines and certificate forms. Validate escrow triggers and release mechanics for your risk scenarios.

Red flags & follow-ups

High transition fees; partial data exports; short termination notice for convenience; escrow without workable build materials; deletion only on request with no certification.


19. Risk Scoring and Tiering Method

Asks

  • Provide your internal risk tiering and how you propose to align with ours.
  • Share inherent and residual risk assessments by domain (security, privacy, regulatory, financial, operational).
  • Confirm monitoring frequency, triggers for enhanced due diligence, and board reporting.
  • Explain issue/risk acceptance thresholds and compensating controls where gaps exist.

Why it matters

Consistent risk scoring informs depth of diligence and cadence for ongoing monitoring. Alignment reduces friction and prevents blind spots. Transparent risk acceptance standards inform where contractual or monitoring enhancements are needed.

Evidence to request

Risk register extracts, scoring methodology, board or risk committee materials. Open risk logs and remediation plans with target dates.

How to assess responses

Check for calibration across domains and justification for residual risk ratings. Ensure monitoring cadence matches tiering and contractual obligations.
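One simple way to calibrate across domains is a weighted residual score mapped to tiers. The sketch below is purely illustrative: the weights, thresholds, and monitoring cadences are assumptions to be replaced with your own methodology.

```python
# Illustrative tiering: domain scores 1 (low) to 5 (high); weights and
# thresholds are assumptions, not a standard.
DOMAIN_WEIGHTS = {"security": 0.30, "privacy": 0.25, "regulatory": 0.20,
                  "financial": 0.15, "operational": 0.10}

def residual_tier(scores: dict) -> str:
    """Map weighted domain scores to a diligence/monitoring tier."""
    weighted = sum(DOMAIN_WEIGHTS[d] * s for d, s in scores.items())
    if weighted >= 4.0:
        return "Tier 1: enhanced due diligence, quarterly monitoring"
    if weighted >= 2.5:
        return "Tier 2: standard diligence, semi-annual monitoring"
    return "Tier 3: light-touch diligence, annual review"

print(residual_tier({"security": 4, "privacy": 3, "regulatory": 5,
                     "financial": 2, "operational": 2}))  # Tier 2
```

Whatever the scheme, the key tests are that inherent risk is scored before controls, downgrades to residual risk are justified by evidenced controls, and the tier drives a defined monitoring cadence.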

Red flags & follow-ups

No formal methodology; persistent overdue remediation; arbitrary downgrades of inherent risk; lack of board visibility; acceptance of critical risks without compensating controls.


Documentation Checklist

For each category: documents to request, minimum expectation, and red flags.

  • Corporate. Request: registry extracts, ownership attestations, org chart. Minimum expectation: clear legal identity and ownership. Red flags: opaque ownership, PEP/sanctions links.
  • Financial. Request: audited accounts or runway statement, insurance COIs. Minimum expectation: solvency and fit‑for‑purpose insurance. Red flags: going‑concern issues, inadequate cyber cover.
  • Regulatory. Request: licenses, policies, audit outcomes. Minimum expectation: all necessary authorisations. Red flags: enforcement actions, gaps in scope.
  • Security. Request: SOC 2/ISO 27001, pen test summaries, IR plan. Minimum expectation: independent assurance and mature controls. Red flags: no audits, weak IAM or patching.
  • Privacy. Request: DPA, DPIA, sub‑processor list, SCCs. Minimum expectation: clear roles, lawful transfers. Red flags: unmapped data flows, breach history without remediation.
  • Technology. Request: architecture, SDLC, SLA/uptime. Minimum expectation: stable, documented stack. Red flags: no DR testing, poor change control.
  • Legal. Request: MSA/SOW, IP chain, indemnities. Minimum expectation: balanced risk allocation. Red flags: overbroad disclaimers, onerous lock‑in.
  • Commercial. Request: price sheets, renewal terms. Minimum expectation: transparent TCO and protections. Red flags: hidden fees, steep uplifts.
  • People. Request: HR/training policies, screening. Minimum expectation: trained and vetted staff. Red flags: overreliance on unscreened contractors.
  • ESG. Request: policies, reports, supplier code. Minimum expectation: baseline ESG commitments. Red flags: no policy or metrics.
  • AFC. Request: policies, system validation. Minimum expectation: effective controls. Red flags: no SAR process, outdated lists.
